Results 1 - 3 of 3
1.
Med Image Anal. 2022 Oct;81:102554.
Article in English | MEDLINE | ID: mdl-35921712

ABSTRACT

Hepatocellular carcinoma (HCC) detection, size grading, and quantification (i.e. the center-point coordinates, max diameter, and area) using multi-modality magnetic resonance imaging (MRI) are clinically significant tasks for HCC assessment and treatment. However, delivering the three tasks simultaneously is extremely challenging due to: (1) the lack of an effective mechanism to capture the relevance among multi-modality MRI information for multi-modality feature fusion and selection; (2) the lack of an effective mechanism and constraint strategy to achieve mutual promotion among the tasks. In this paper, we propose a task relevance driven adversarial learning framework (TrdAL) for simultaneous HCC detection, size grading, and multi-index quantification using multi-modality MRI (i.e. in-phase, out-phase, T2FS, and DWI). The TrdAL first obtains expressive, dimension-reduced features via a CNN-based encoder. Secondly, the proposed modality-aware Transformer is used for multi-modality MRI feature fusion and selection; it addresses the challenge of multi-modality information diversity by capturing the relevance among the MRI modalities. Then, an innovative task relevance driven and radiomics guided discriminator (Trd-Rg-D) performs united adversarial learning. The Trd-Rg-D captures internal high-order relationships to refine the performance of all tasks simultaneously. Moreover, adding radiomics features as prior knowledge to the Trd-Rg-D enhances detailed feature extraction. Lastly, a novel task interaction loss function constrains the TrdAL, enforcing higher-order consistency among the multi-task labels to enhance mutual promotion. The TrdAL is validated on corresponding multi-modality MRI of 135 subjects.
The experiments demonstrate that TrdAL achieves high accuracy: (1) HCC detection: specificity of 93.71%, sensitivity of 93.15%, accuracy of 93.33%, and IoU of 82.93%; (2) size grading: accuracies for large, medium, small, and tiny tumors and for healthy subjects of 90.38%, 87.74%, 80.68%, 77.78%, and 96.87%; (3) multi-index quantification: mean absolute errors of the center point, max diameter, and area of 2.74 mm, 3.17 mm, and 144.51 mm². These results indicate that the proposed TrdAL provides an efficient, accurate, and reliable tool for HCC diagnosis in clinical practice.
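The modality-aware fusion described above can be sketched as scaled dot-product attention across the modality axis. A minimal NumPy sketch (the function name, toy dimensions, and the omission of learned query/key/value projections are illustrative assumptions, not the paper's actual Transformer):

```python
import numpy as np

def modality_attention_fusion(feats):
    """Fuse per-modality feature vectors with scaled dot-product
    attention over the modality axis.

    feats: (M, D) array -- one D-dim feature vector per modality.
    Returns a single fused D-dim vector.
    """
    M, D = feats.shape
    # Pairwise modality relevance (queries = keys = values = feats;
    # learned projections omitted for brevity).
    scores = feats @ feats.T / np.sqrt(D)            # (M, M)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    attended = weights @ feats                       # (M, M) @ (M, D) -> (M, D)
    return attended.mean(axis=0)                     # pool over modalities

# Four modalities (in-phase, out-phase, T2FS, DWI), 8-dim toy features
rng = np.random.default_rng(0)
fused = modality_attention_fusion(rng.standard_normal((4, 8)))
```

In the paper's setting the relevance weights would let informative modalities dominate the fused representation; here the weights are just the softmaxed feature similarities.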


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/pathology , Humans , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Magnetic Resonance Imaging/methods
2.
Comput Math Methods Med. 2022;2022:1248311.
Article in English | MEDLINE | ID: mdl-35309832

ABSTRACT

Because there is no contrast enhancement, liver tumor regions in nonenhanced MRI appear with blurred edges and low contrast, which greatly affects the speed and accuracy of liver tumor diagnosis. Precise segmentation of liver tumors from nonenhanced MRI has therefore become an urgent and challenging task. In this paper, we propose an edge constraint and localization mapping segmentation model (ECLMS) to accurately segment liver tumors from nonenhanced MRI. It consists of two parts: a localization network and a dual-branch segmentation network. The localization network generates prior coarse masks that provide position mapping for the segmentation network, enhancing the model's ability to localize liver tumors in nonenhanced images. The dual-branch segmentation network has a main decoding branch that focuses on feature representation in the core region of the tumor and an edge decoding branch that concentrates on capturing the edge information of the tumor. To improve the model's ability to capture detailed features, sSE blocks and dense upward connections are introduced. A bottleneck multiscale module constructs multiscale feature representations using kernels of different sizes while integrating the location mapping of the tumor. The ECLMS model is evaluated on a private nonenhanced MRI dataset comprising 215 subjects. The model achieves a best Dice coefficient, precision, and accuracy of 90.23%, 92.25%, and 92.39%, respectively. The experimental results demonstrate the effectiveness of our model, which outperforms existing segmentation methods on the nonenhanced liver tumor segmentation task.
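The sSE (spatial squeeze-and-excitation) blocks mentioned above re-weight a feature map with a spatial gate obtained by squeezing the channel axis through a 1x1 convolution. A minimal NumPy sketch of that standard block (function name and toy shapes are illustrative; the paper's exact block may differ):

```python
import numpy as np

def sse_block(feature_map, w, b):
    """Spatial squeeze-and-excitation: a 1x1 convolution squeezes the
    channel axis into an (H, W) gate, which re-weights every channel.

    feature_map: (C, H, W); w: (C,) 1x1-conv weights; b: scalar bias.
    """
    logits = np.tensordot(w, feature_map, axes=([0], [0])) + b  # (H, W)
    gate = 1.0 / (1.0 + np.exp(-logits))                        # sigmoid in (0, 1)
    return feature_map * gate[None, :, :]                       # broadcast over C

# Toy feature map: 3 channels, 4x4 spatial grid
fm = np.ones((3, 4, 4))
out = sse_block(fm, np.zeros(3), 0.0)  # zero weights -> gate is 0.5 everywhere
```

The gate lets spatial positions that look tumor-like pass through strongly while suppressing background, which is why such blocks help recover detailed edge features.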


Subject(s)
Image Interpretation, Computer-Assisted/statistics & numerical data , Liver Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/statistics & numerical data , Carcinoma, Hepatocellular/diagnostic imaging , Computational Biology , Databases, Factual/statistics & numerical data , Hemangioma/diagnostic imaging , Humans , Image Enhancement/methods , Neural Networks, Computer
3.
Med Image Anal. 2021 Oct;73:102154.
Article in English | MEDLINE | ID: mdl-34280670

ABSTRACT

Simultaneous segmentation and detection of liver tumors (hemangioma and hepatocellular carcinoma (HCC)) using multi-modality non-contrast magnetic resonance imaging (NCMRI) is crucial for clinical diagnosis. However, it remains a challenging task because: (1) HCC information on NCMRI is insufficient, making liver tumor feature extraction difficult; (2) the diverse imaging characteristics of multi-modality NCMRI make feature fusion and selection difficult; (3) the absence of information distinguishing hemangioma from HCC on NCMRI makes liver tumor detection difficult. In this study, we propose a united adversarial learning framework (UAL) for simultaneous liver tumor segmentation and detection using multi-modality NCMRI. The UAL first uses a multi-view aware encoder to extract multi-modality NCMRI information for liver tumor segmentation and detection. In this encoder, a novel edge dissimilarity feature pyramid module is designed to facilitate complementary multi-modality feature extraction. Secondly, a newly designed fusion and selection channel fuses the multi-modality features and decides the feature selection. Then, the proposed mechanism of coordinate sharing with padding integrates the segmentation and detection tasks so that both can perform united adversarial learning in one discriminator. Lastly, an innovative multi-phase radiomics guided discriminator exploits clear and specific tumor information to improve multi-task performance via the adversarial learning strategy. The UAL is validated on corresponding multi-modality NCMRI (i.e. T1FS pre-contrast MRI, T2FS MRI, and DWI) and three-phase contrast-enhanced MRI of 255 clinical subjects.
The experiments show that UAL achieves high performance, with a Dice similarity coefficient of 83.63%, pixel accuracy of 97.75%, intersection-over-union of 81.30%, sensitivity of 92.13%, specificity of 93.75%, and detection accuracy of 92.94%, demonstrating that UAL has great potential in the clinical diagnosis of liver tumors.
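The evaluation metrics reported above (Dice, pixel accuracy, IoU, sensitivity, specificity) can all be derived from the confusion counts of a predicted versus ground-truth binary mask. A small NumPy sketch (the function name is illustrative; it assumes both tumor and background pixels are present so no denominator is zero):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Confusion-count metrics for binary masks: Dice, pixel accuracy,
    IoU, sensitivity, specificity."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)     # tumor pixels correctly predicted
    tn = np.sum(~pred & ~gt)   # background correctly predicted
    fp = np.sum(pred & ~gt)    # background predicted as tumor
    fn = np.sum(~pred & gt)    # tumor missed
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "pixel_accuracy": (tp + tn) / (tp + tn + fp + fn),
        "iou": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy 2x2 masks: one true positive, one false positive, two true negatives
m = segmentation_metrics(np.array([[1, 1], [0, 0]]),
                         np.array([[1, 0], [0, 0]]))
```

Note that Dice and IoU are computed over tumor pixels only, which is why they sit well below the pixel accuracy in the figures quoted above: background dominates the image and inflates accuracy.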


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging , Magnetic Resonance Imaging